Use clear syntax¶

This is the first installment of a series on how to use guidance to control large language models (LLMs). We'll start from the basics and work our way up to more advanced topics.

In this document, we'll show that having clear syntax enables you to communicate your intent to the LLM, and also ensure that outputs are easy to parse (like JSON that is guaranteed to be valid). For the sake of clarity and reproducibility we'll start with a small open source Qwen model without chat fine-tuning. Then, we will show how the same ideas apply to chat-tuned models like Phi-4-mini-instruct and API-restricted models like GPT-4o-mini.

Clear syntax helps with parsing the output¶

The first and most obvious benefit of using clear syntax is that it makes it easier to parse the output of the LLM. Even if the LLM is able to generate a correct output, it may be difficult to programmatically extract the desired information from it. For example, consider the following Guidance prompt (where gen() is a guidance function to generate text from the LLM):

In [1]:
import math
import guidance
from guidance import models, gen, select

lm = models.Transformers("Qwen/Qwen2.5-1.5B")

We can now ask a question:

In [2]:
# run a guidance program (by appending to the model state)
lm + "Name common Linux operating system commands." + gen(max_tokens=50)
Out[2]:
<guidance.models._transformers.Transformers at 0x13090e990>

While the answer is readable, the output format is arbitrary (i.e. we don't know it in advance), and thus hard to parse programmatically. For example, here is another run of the same prompt where the output format is very different:

In [3]:
lm + "Name common Mac operating system commands." + gen(max_tokens=50)
Out[3]:
<guidance.models._transformers.Transformers at 0x130921e10>

Enforcing clear syntax in your prompts can help reduce the problem of arbitrary output formats. There are a couple ways you can do this:

  1. Giving structure hints to the LLM inside a standard prompt (perhaps even using few shot examples).
  2. Writing a guidance program template that enforces a specific output format.

These are not mutually exclusive. Let's see an example of each approach.

Traditional prompt with structure hints¶

Here is an example of a traditional prompt that uses structure hints to encourage a specific output format. The prompt is designed to generate a list of 5 items that is easy to parse. Note that in comparison to the previous prompt, we have written this one so that it commits the LLM to a specific, clear syntax (numbers followed by a quoted string). This makes the output much easier to parse after generation.

In [4]:
lm +'''\
What are the most common commands used in the Linux operating system?

Here are the 5 most common commands:
1. "''' + gen(max_tokens=70)
Out[4]:
<guidance.models._transformers.Transformers at 0x132a64050>

Note that the LLM follows the syntax correctly, but does not stop after generating 5 items. We can fix this by creating a clear stopping criterion, e.g. asking for 6 items and stopping when we see the start of the sixth item (so we end up with five):

In [5]:
lm + '''\
What are the most common commands used in the Linux operating system?

Here are the 6 most common commands:
1. "''' + gen(max_tokens=100, stop="\n6.")
Out[5]:
<guidance.models._transformers.Transformers at 0x132b25410>
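Because the prompt commits the model to the numbered, quoted-string syntax, pulling the items back out takes only a one-line regex. Here is a minimal sketch (the completion text below is hypothetical; actual commands vary from run to run):

```python
import re

# Hypothetical completion in the committed syntax; the prompt itself
# supplies the leading '1. "', so we prepend it before parsing.
completion = 'ls"\n2. "cd"\n3. "pwd"\n4. "mkdir"\n5. "rm"'
text = '1. "' + completion

# One number-dot-quote pattern recovers every item.
commands = re.findall(r'\d+\.\s*"([^"]+)"', text)
print(commands)  # ['ls', 'cd', 'pwd', 'mkdir', 'rm']
```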

Enforcing syntax with a guidance program¶

Rather than using hints, a Guidance program enforces a specific output format, inserting the tokens that are part of the structure rather than getting the LLM to generate them. For example, this is what we would do if we wanted to enforce a numbered list as a format:

In [6]:
lm2 = lm + """What are the most common commands used in the Linux operating system?

Here are the 5 most common commands:
"""
for i in range(5):
    lm2 += f'''{i+1}. "{gen('commands', list_append=True, stop='"', max_tokens=50)}"\n'''

Here is what is happening in the above prompt:

  • The lm2 = lm + """What are... command saves the new model state that results from appending the string to the base model into the variable lm2. The for loop then iteratively updates lm2 by adding a mixture of strings and generated sequences.
  • Note that the structure (the numbers, and quotes) are not generated by the LLM.
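One detail worth noting: `+` on a guidance model returns a new state and leaves the original untouched, so model states compose the way immutable values do. A quick pure-Python analogy (plain strings standing in for model states):

```python
# Plain strings standing in for model states: `+=` rebinds the name to a
# new value rather than mutating the original, just as with guidance models.
base = "Here are the 5 most common commands:\n"
state = base
for i, cmd in enumerate(["ls", "cd", "pwd", "mkdir", "rm"]):
    state += f'{i+1}. "{cmd}"\n'  # each += builds a new value

assert base == "Here are the 5 most common commands:\n"  # original unchanged
print(state)
```

This is why `lm` can be reused for multiple independent runs later in this document: appending to it never modifies it.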

Output parsing is done automatically by the Guidance program, so we don't need to worry about it. In this case, the commands variable will be the list of generated command names:

In [7]:
lm2["commands"]
Out[7]:
['ls', 'cd', 'pwd', 'mkdir', 'rm']

Forcing valid JSON syntax¶

Using guidance we can create any syntax we want with absolute confidence that what we generate will exactly follow the format we specify. This is particularly useful for things like JSON.

Format string embedding¶

With Guidance, there are multiple valid strategies to generate JSON. One strategy, demonstrated below, is to directly embed the JSON syntax in a format string.

In [8]:
import guidance

# define a re-usable "guidance function" that we can use below
@guidance
def quoted_list(lm, name, n):
    for i in range(n):
        if i > 0:
            lm += ", "
        lm += '"' + gen(name, list_append=True, stop=['"', ',', ' ']) + '"'
    return lm

lm + f"""What are the most common commands used in the Linux operating system?

Here are the 5 most common commands in JSON format:
{{
    "commands": [{quoted_list('commands', 5)}],
    "my_favorite_command": "{gen('favorite_command', stop=['"', ' '])}"
}}"""
Out[8]:
<guidance.models._transformers.Transformers at 0x132c78050>

JSON schema¶

Guidance also supports JSON schemas for JSON generation.

In [9]:
from guidance import models, json

schema = """{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "commands": {
      "type": "array",
      "items": {
        "type": "string"
      },
      "minItems": 5,
      "maxItems": 5,
      "description": "Array of exactly 5 Linux commands"
    },
    "my_favorite_command": {
      "type": "string",
      "description": "A single Linux command"
    }
  },
  "required": ["commands", "my_favorite_command"],
  "additionalProperties": false
}"""

lm + f"""What are the most common commands used in the Linux operating system?

Here are the 5 most common commands in JSON format:
{json(schema=schema)}"""
Out[9]:
<guidance.models._transformers.Transformers at 0x132ac0410>
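Because the json guidance function constrains decoding to the schema, the generated text is guaranteed to parse. Here is a minimal sketch with a hypothetical generated string (the real output depends on the model run); the stdlib json module stands in for whatever downstream consumer you have:

```python
import json as pyjson

# Hypothetical generation in the exact shape the schema guarantees.
generated = (
    '{"commands": ["ls", "cd", "pwd", "mkdir", "rm"], '
    '"my_favorite_command": "ls"}'
)

data = pyjson.loads(generated)  # parses cleanly: invalid JSON was ruled out
assert len(data["commands"]) == 5  # minItems/maxItems pinned this to exactly 5
print(data["my_favorite_command"])
```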

Clear syntax gives the user more power¶

Since clear structure gives us outputs that are easy to parse and manipulate, we can take the generated items, remove duplicates, and use them in the next step of our program.
Here is an example program that takes the listed commands, picks one, and does further operations on it:

In [10]:
newline = "\n" # because for python < 3.12 we can't put a backslash in f-string values
lm2 = lm + 'What are the most common commands used in the Linux operating system?\n'

# generate a bunch of command names
lm_tmp = lm2 + 'Here is a common command: "'
commands = [(lm_tmp + gen('command', stop='"', max_tokens=20, temperature=1.0))["command"] for i in range(10)]

# discuss them
for i,command in enumerate(set(commands)):
    lm2 += f'{i+1}. "{command}"\n'
lm2 += f'''Perhaps the most useful command from that list is: "{gen('cool_command', stop='"')}", because {gen('cool_command_desc', max_tokens=100, stop=newline)}
On a scale of 1-10, it has a coolness factor of: {gen('coolness', regex="[0-9]+")}.'''

We introduced one important control method in the above program: the regex pattern guide for generation. The command gen('coolness', regex="[0-9]+") uses a regular expression to enforce a certain syntax on the output (i.e. forcing the output to match an arbitrary regular expression). In this case we force the coolness score to be a whole number (note that generation stops once the model has completed the pattern and starts to generate something else).
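To see exactly what a pattern admits, it can help to check candidate outputs against the same regex with Python's re module; constrained generation behaves like fullmatch over the generated span:

```python
import re

# The same pattern passed to gen(); with constrained generation the whole
# generated span must match it, which fullmatch mirrors.
pattern = re.compile(r"[0-9]+")

assert pattern.fullmatch("8")        # single digit: allowed
assert pattern.fullmatch("10")       # multi-digit whole number: allowed
assert not pattern.fullmatch("8.5")  # the model can never emit a decimal point
assert not pattern.fullmatch("ten")  # prose answers are ruled out entirely
```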

Combining clear syntax with model-specific structure like chat¶

All the examples above used a base model without any later fine-tuning. But if the model you are using has been fine-tuned, it is important to combine clear syntax with the structure that was tuned into the model. For example, chat models have been fine-tuned to expect several "role" tags in the prompt. We can leverage these tags to further enhance the structure of our programs/prompts.

The following example adapts the above prompt for use with a chat based model. guidance has special role context blocks (like user()), which allow you to mark out various roles and get them automatically translated into the right special tokens or API calls for the LLM you are using. This helps make prompts easier to read and makes them more general across different chat models.

In [11]:
# if we have multiple GPUs we can load the chat model on a different GPU with the `device` argument
del lm
chat_lm = models.Transformers("microsoft/Phi-4-mini-instruct")
In [12]:
from guidance import user, assistant, system
newline = "\n"

with user():
    lm2 = chat_lm + "What are the most common commands used in the Linux operating system?"

with assistant():

    # generate a bunch of command names
    lm_tmp = lm2 + 'Here are ten common command names:\n'
    for i in range(10):
        lm_tmp += f'{i+1}. "' + gen('commands', list_append=True, stop='"', max_tokens=20, temperature=0.7) + '"\n'

    # discuss them
    for i,command in enumerate(set(lm_tmp["commands"])):
        lm2 += f'{i+1}. "{command}"\n'
    lm2 += f'''Perhaps the most useful command from that list is: "{gen('cool_command', stop='"')}", because {gen('cool_command_desc', max_tokens=100, stop=newline)}
On a scale of 1-10, it has a coolness factor of: {gen('coolness', regex="[0-9]+")}.'''

Using API-restricted models¶

When we have control over generation, we can guide the output at any step of the process. But some model endpoints (e.g. OpenAI's ChatGPT) currently have a much more limited API, e.g. we can't control what happens inside each role block.
While this limits the user's power, we can still use a subset of syntax hints, and enforce the structure outside of the role blocks:

In [13]:
openai_model = models.OpenAI("gpt-4o-mini")
In [14]:
import time

lm = openai_model

call_delay_secs = 0.5

with system():
    lm += "You are an expert unix systems admin that is willing follow any instructions."

with user():
    lm += f"""\
What are the top ten most common commands used in the Linux operating system?

List the commands one per line.  Please list them as 1. "command" ...one per line with double quotes and no description."""

# generate a list of commands
with assistant():
    lm += gen('commands', list_append=True, temperature=1)
    time.sleep(call_delay_secs)

with user():
    lm += "If you were to guess, which of the above commands would a sys admin think was the coolest? Just name the command, don't print anything else."

with assistant():
    lm += gen('cool_command')
    time.sleep(call_delay_secs)

with user():
    lm += "What is that command's coolness factor on a scale from 0-10? Just write the digit and nothing else."

with assistant():
    lm += gen('coolness')
    time.sleep(call_delay_secs)

with user():
    lm += "Why is that command so cool?"

with assistant():
    lm += gen('cool_command_desc', max_tokens=100)
    time.sleep(call_delay_secs)

Summary¶

Whenever you are building a prompt to control a model it is important to consider not only the content of the prompt, but also the syntax. Clear syntax makes it easier to parse the output, helps the LLM produce output that matches your intent, and lets you write complex multi-step programs. While even a trivial example (listing common OS commands) benefits from clear syntax, most tasks are much more complex, and benefit even more. We hope this post gives you some ideas on how to use clear syntax to improve your prompts.

Also, make sure to check out guidance. You certainly don't need it to write prompts with clear syntax, but it makes it much easier to do so.


Have an idea for more helpful examples? Pull requests that add to this documentation notebook are encouraged!